Applying compression algorithms on a Hadoop cluster, implemented through Apache Tez and Hadoop MapReduce
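
No abstract is available in this listing, but the title names a standard configuration surface: Hadoop jobs can compress both intermediate (shuffle) data and final output, and the same codecs apply whether the DAG runs on classic MapReduce or on Tez. A minimal sketch, not the paper's own code, assuming the Snappy native libraries are installed on the cluster (the job name is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CompressedJobSetup {
    public static Job configure() throws Exception {
        Configuration conf = new Configuration();

        // Compress map-side (intermediate) output to cut shuffle traffic.
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setClass("mapreduce.map.output.compress.codec",
                SnappyCodec.class, CompressionCodec.class);

        // On Tez, the analogous runtime switches are
        // tez.runtime.compress and tez.runtime.compress.codec.

        Job job = Job.getInstance(conf, "compression-benchmark"); // placeholder name

        // Compress the job's final output files as well.
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
        return job;
    }
}
```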


Similar Articles

Benchmark Hadoop and Mars: MapReduce on cluster versus on GPU

MapReduce [5] is an emerging programming model that utilizes distributed processing elements (PEs) on large datasets. With this model, programmers can write highly parallelized code without explicitly dealing with task scheduling and code parallelism in distributed systems. In this paper, we comparatively evaluate the performance of the MapReduce model on Hadoop [2] and on Mars [3]. Hadoop is a software...
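
To make the model concrete, here is a minimal word-count mapper/reducer pair for Hadoop. It is an illustrative sketch of the programming model described above, not code from the cited paper:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// The framework handles scheduling and parallelism; user code only
// defines the per-record map step and the per-key reduce step.
public class WordCount {
    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE); // emit (word, 1) per token
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum)); // total count per word
        }
    }
}
```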

On the usability of Hadoop MapReduce, Apache Spark & Apache Flink for data science

Distributed data processing platforms for cloud computing are important tools for large-scale data analytics. Apache Hadoop MapReduce has become the de facto standard in this space, though its programming interface is relatively low-level, requiring many implementation steps even for simple analysis tasks. This has led to the development of advanced dataflow-oriented platforms, most prominently...
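
To illustrate the contrast the snippet draws, the same word count that needs a full mapper/reducer pair in MapReduce collapses to a chain of dataflow operators in Spark's Java API (a sketch only; the input and output paths are placeholders):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("wordcount");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Split lines into words, pair each word with 1, sum per key.
            JavaPairRDD<String, Integer> counts = sc
                    .textFile("hdfs:///input/text")             // placeholder path
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                    .mapToPair(w -> new Tuple2<>(w, 1))
                    .reduceByKey(Integer::sum);
            counts.saveAsTextFile("hdfs:///output/counts");     // placeholder path
        }
    }
}
```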

Hadoop MapReduce OpenCL Plugin

Modern systems generate huge amounts of information, from areas such as finance, telematics, healthcare, and IoT devices, to name a few. Modern computing frameworks like MapReduce need an ever-increasing amount of computing power to sort, arrange, and generate insights from this data. This project is an attempt to harness the power of heterogeneous computing, more specifically to take benefit...
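
The listing does not show the plugin's interface, but the general shape of such an offload is a mapper that hands its hot loop to native code. The sketch below is hypothetical throughout: the library name oclkernel and the score entry point are invented for illustration and are not the project's API.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch of a GPU-offloading mapper: the heavy per-record
// computation is delegated to an OpenCL kernel behind a JNI boundary.
public class GpuMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    static {
        System.loadLibrary("oclkernel"); // invented native library name
    }

    // Invented JNI entry point standing in for "run this record on the GPU".
    private static native int score(byte[] record);

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        int result = score(value.copyBytes()); // offloaded computation
        ctx.write(value, new IntWritable(result));
    }
}
```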

Survey on MapReduce and Scheduling Algorithms in Hadoop

We are living in a data world, and it is not easy to measure the total volume of data stored electronically; it is counted in zettabytes or exabytes and referred to as Big Data. It can be unstructured, structured, or semi-structured, and it is not convenient to store or process with conventional data management methods, or on machines with limited computational power. The Hadoop system is used to ...
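
The scheduling side of the survey shows up in practice as queue and scheduler configuration. A small, hedged example: the queue name analytics is a placeholder, and it presumes the cluster's scheduler (e.g. the Capacity or Fair Scheduler) defines such a queue.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class QueuedJobSetup {
    public static Job configure() throws Exception {
        Configuration conf = new Configuration();

        // Ask the scheduler to place this job in a specific YARN queue;
        // how that queue shares the cluster is the scheduler's decision.
        conf.set("mapreduce.job.queuename", "analytics"); // placeholder queue

        return Job.getInstance(conf, "queued-analytics-job"); // placeholder name
    }
}
```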

Building and Installing a Hadoop/MapReduce Cluster from Commodity Components

This tutorial presents a recipe for the construction of a compute cluster for processing large volumes of data, using cheap, easily available personal computer hardware (Intel/AMD-based PCs) and freely available open source software (Ubuntu Linux, Apache Hadoop). It describes a straightforward way to build, install, and operate such a cluster from commodity hardware. A ...
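
Once such a cluster is running, a quick programmatic check confirms that a client can actually reach HDFS. A minimal sketch, assuming the cluster's core-site.xml and hdfs-site.xml are on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ClusterSmokeTest {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));

        // Listing the HDFS root succeeds only if the NameNode is reachable.
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}
```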


Journal

Journal title: International Journal of Engineering & Technology

Year: 2018

ISSN: 2227-524X

DOI: 10.14419/ijet.v7i2.26.12539